Explainable Recommendations and Calibrated Trust: Two Systematic User Errors


Abstract

The increasing adoption of collaborative human–artificial intelligence decision-making tools has created a need to explain recommendations for safe and effective collaboration. We explore how users interact with explanations and why trust-calibration errors occur, taking clinical decision-support systems as a case study.


Similar Resources

Using Adjective Features from User Reviews to Generate Higher Quality and Explainable Recommendations

Recommender systems have played a significant role in alleviating the “information overload” problem. Existing collaborative filtering approaches face data sparsity and transparency problems, and content-based approaches suffer from insufficient attributes. In this paper, we show that abundant adjective features embedded in user reviews can be used to characterize movies...


Explainable Entity-based Recommendations with Knowledge Graphs

Explainable recommendation is an important task. Many methods have been proposed that generate explanations from the content and reviews written for items. When review text is unavailable, generating explanations remains a hard problem. In this paper, we illustrate how explanations can be generated in such a scenario by leveraging external knowledge in the form of knowledge graphs. Our method...


Trust and Recommendations

Collaboration, interaction and information sharing are the main driving forces of the current generation of web applications referred to as ‘Web 2.0’ [47]. Well-known examples of this emerging trend include weblogs (online diaries or journals for sharing ideas instantly), Friend-Of-A-Friend1 (FOAF) files (machine-readable documents describing basic properties of a person, including links betwee...


Learning Explainable User Sentiment and Preferences for Information Filtering

In the last decade, online social networks have enabled people to interact in many ways with each other and with content. The digital traces of such actions reveal people’s preferences towards online content such as news or products. These traces often result from interactions such as sharing or liking, but also from interactions in natural language. The continuous growth of the amount of conte...


Do You Get It? User-Evaluated Explainable BDI Agents

In this paper we focus on explaining the behavior of autonomous agents to humans, i.e., explainable agents. Explainable agents are useful for many reasons, including scenario-based training (e.g., disaster training), tutoring and pedagogical systems, agent development and debugging, gaming, and interactive storytelling. As the aim is to generate plausible and insightful explanations for humans, user...



Journal

Journal title: IEEE Computer

Year: 2021

ISSN: 1558-0814, 0018-9162

DOI: https://doi.org/10.1109/mc.2021.3076131